add deps-tf1 and docker-cuda-tf1 #1186
Conversation
Makes sense and LGTM. I'm testing to verify and will merge soon, so we can test the deployment.
Unfortunately, in contrast to when I first came up with the deps-cuda solution, the sweet spot where we could easily combine CUDA dependencies for both Torch and TF seems to have vanished now, resulting in a much larger image. It looks like an older version of nvidia-tensorflow might work better, but I'll have to do further analysis.
Update: we definitely need to hold at
Unfortunately, the breaking changes in recent Numpy pose an additional difficulty: obviously, the older releases of TF etc. are incompatible, so Numpy must now be held at
The whole situation will eventually get easier, with TF starting to require its CUDA dependencies explicitly, and not requiring Conda anymore, but instead allowing
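For concreteness, holding Numpy back is typically done with a pip constraints file – a hypothetical sketch only, since the exact upper bound depends on which TF release is being pinned:

```
# constraints.txt – hypothetical upper bound for Numpy,
# chosen to stay compatible with the older TF releases
numpy<1.24
```

Passing this via `pip install -c constraints.txt ...` keeps the pin in one place instead of scattering it across requirements files.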
I think I did find a workable compromise again. Let's see what the CD says.

Looking at this horde of CI jobs: shouldn't we rule out these tests if the only changes affect the Dockerfiles (or Docker recipes)?
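Skipping test jobs for Docker-only changes could be expressed with GitHub Actions path filters – a hypothetical snippet, the file patterns are assumptions about this repo's layout:

```yaml
# Hypothetical: only trigger the test workflow when
# something other than the Docker recipes changed
on:
  push:
    paths-ignore:
      - 'Dockerfile*'
      - 'docker/**'
```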
I think we really need this. Even if we don't use the variant images (
I resolved the conflict, but the CI failure seems unrelated – we probably just need to update test assets... |
Guessed right. |
This would allow specifying

    FROM ocrd/core-cuda-tf1

for all modules depending on Tensorflow 1 – so this (huge!) Docker layer can be shared. The same could be worked out for TF2 and Pytorch.
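For illustration, a downstream module's Dockerfile could then start from the shared layer – a hypothetical sketch, the build steps are assumptions, not any particular module's recipe:

```
# Hypothetical module image reusing the shared TF1/CUDA base layer
FROM ocrd/core-cuda-tf1
WORKDIR /build
COPY . .
# install the module on top of the pre-built Tensorflow 1 stack;
# the heavy CUDA/TF layer is shared across all such images
RUN pip install .
```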